Understanding Affinity Scheduling
Affinity scheduling is a special scheduling discipline used in multiprocessor systems. You do not have to take any action to benefit from affinity scheduling, but you should be aware of how it works.
As a process executes, it causes more and more of its data and instruction text to be loaded into the processor cache (see "Reducing Cache Misses"). This creates an "affinity" between the process and the CPU. No other process can use that CPU as effectively, and the process cannot execute as fast on any other CPU.
The IRIX kernel notes the CPU on which a process last ran, as well as the degree of affinity between them. Affinity is measured by the amount of time the process used the CPU: 300 microseconds or less represents zero affinity, and 10 milliseconds or more represents 100% affinity.
When the process gives up the CPU--either because its time slice has expired or because it has blocked--one of three things happens to the CPU:
- The CPU runs the same process again immediately.
- The CPU spins idle, waiting for work.
- The CPU runs a different process.
The first two actions do not reduce the process's affinity. But when the CPU runs a different process, that process begins to build up an affinity while simultaneously reducing the affinity of the earlier process.
As long as a process has any affinity for a CPU, the kernel dispatches it on that CPU whenever possible. Once its affinity has declined to zero, the process can be dispatched on any available CPU. The affinity scheduling policy has these results:
- I/O-bound processes, which execute for short periods and build up little affinity, are quickly dispatched whenever they become ready.
- CPU-bound processes, which build up a strong affinity, are not dispatched as quickly because they have to wait for "their" CPU to be free. However, they do not suffer the serious delays of repeatedly "warming up" a cache.